A direct search method for smooth and nonsmooth unconstrained optimization

Authors

  • C. J. Price
  • M. Reale
  • B. L. Robertson
Abstract

A derivative-free frame-based method for minimizing C^1 and nonsmooth functions is described. The objective is treated as a 'black box' whose gradients are unavailable; the use of frames allows gradient estimates to be formed. At each iteration a ray search is performed either along a direct search quasi-Newton direction, or along the ray through the best frame point. The use of randomly oriented frames and random perturbations is examined, the latter yielding a convergence proof on nonsmooth problems. Numerical results on nonsmooth problems show that the method is effective and that, in practice, the random perturbations are more important than randomly orienting the frames. The method is applicable to nonlinear ℓ1 and ℓ∞ data fitting problems, and to other nonsmooth problems. See http://anziamj.austms.org.au/ojs/index.php/ANZIAMJ/article/view/95 for this article. © Australian Mathematical Society 2008. Published February 27, 2008. ISSN 1446-8735.
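The core frame-based direct search loop can be sketched as follows. This is a minimal illustrative version using a fixed coordinate frame only; it omits the paper's ray search, quasi-Newton direction, random frame orientation, and random perturbations, and all names here are hypothetical.

```python
import numpy as np

def frame_direct_search(f, x0, h0=1.0, shrink=0.5, h_min=1e-8, max_iter=500):
    """Minimal coordinate-frame direct search sketch (illustrative only,
    not the authors' full method)."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    h = h0
    n = x.size
    for _ in range(max_iter):
        if h <= h_min:
            break
        # Evaluate f at the 2n frame points x +/- h*e_i around x.
        best_x, best_f = x, fx
        for i in range(n):
            for s in (1.0, -1.0):
                y = x.copy()
                y[i] += s * h
                fy = f(y)
                if fy < best_f:
                    best_x, best_f = y, fy
        if best_f < fx:
            x, fx = best_x, best_f   # move to the best frame point
        else:
            h *= shrink              # no improvement: refine the frame size
    return x, fx

# usage: minimize a smooth quadratic with minimizer (1, 0)
xmin, fmin = frame_direct_search(lambda v: (v[0] - 1)**2 + 4 * v[1]**2,
                                 [3.0, 2.0])
```

Shrinking the frame size only when no frame point improves on the centre is what drives convergence arguments for methods of this family.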


Related works

A modified Polak-Ribière-Polyak conjugate gradient algorithm for nonsmooth convex programs

The conjugate gradient (CG) method is one of the most popular methods for solving smooth unconstrained optimization problems due to its simplicity and low memory requirement. However, the use of CG methods has so far been mainly restricted to smooth optimization problems. The purpose of this paper is to present efficient conjugate gradient-type methods to solve nonsmooth optimization probl...
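For reference, the classical Polak-Ribière-Polyak CG iteration that such methods build on can be sketched as below. This is a standard smooth-case sketch with a backtracking Armijo line search and the common PRP+ safeguard; it is not the paper's modified nonsmooth algorithm, and the function names are illustrative.

```python
import numpy as np

def prp_cg(f, grad, x0, tol=1e-6, max_iter=200):
    """Polak-Ribiere-Polyak conjugate gradient sketch for smooth f
    (illustrative; the paper's nonsmooth variant differs)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking Armijo line search along d.
        t, fx = 1.0, f(x)
        while f(x + t * d) > fx + 1e-4 * t * g.dot(d):
            t *= 0.5
            if t < 1e-12:
                break
        x_new = x + t * d
        g_new = grad(x_new)
        # PRP coefficient: beta = g_new^T (g_new - g) / ||g||^2.
        beta = g_new.dot(g_new - g) / g.dot(g)
        beta = max(beta, 0.0)        # PRP+ safeguard
        d = -g_new + beta * d
        x, g = x_new, g_new
    return x

# usage: minimize f(x) = x1^2 + 10*x2^2, minimizer at the origin
x = prp_cg(lambda v: v[0]**2 + 10 * v[1]**2,
           lambda v: np.array([2 * v[0], 20 * v[1]]),
           [4.0, -3.0])
```

The PRP+ truncation (clamping beta at zero) effectively restarts the method with a steepest descent step when consecutive gradients are nearly parallel.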


Smoothing and Worst-Case Complexity for Direct-Search Methods in Nonsmooth Optimization

In the context of the derivative-free optimization of a smooth objective function, it has been shown that the worst case complexity of direct-search methods is of the same order as the one of steepest descent for derivative-based optimization, more precisely that the number of iterations needed to reduce the norm of the gradient of the objective function below a certain threshold is proportiona...


A Free Line Search Steepest Descent Method for Solving Unconstrained Optimization Problems

In this paper, we solve unconstrained optimization problems using a free line search steepest descent method. First, we propose a double-parameter scaled quasi-Newton formula for calculating an approximation of the Hessian matrix. The approximation obtained from this formula is a positive definite matrix that satisfies the standard secant relation. We also show that the largest eigenvalue...


Frames and Grids in Unconstrained and Linearly Constrained Optimization: A Nonsmooth Approach

This paper describes a class of frame-based direct search methods for unconstrained and linearly constrained optimization. A template is described and analyzed using Clarke’s nonsmooth calculus. This provides a unified and simple approach to earlier results for grid- and frame-based methods, and also provides partial convergence results when the objective function is not smooth, undefined in some...


A New Hybrid Conjugate Gradient Method Based on Eigenvalue Analysis for Unconstrained Optimization Problems

In this paper, two extended three-term conjugate gradient methods based on the Liu-Storey (LS) conjugate gradient method are presented to solve unconstrained optimization problems. A remarkable property of the proposed methods is that the search direction always satisfies the sufficient descent condition independent of the line search method, based on eigenvalue analysis. The globa...



Publication date: 2008